Human-level AI is not inevitable. We have the power to change course Garrison Lovely

The Guardian

"Technology happens because it is possible," OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb. Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity. For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had.


What is a Godzilla anyway? The 70-year-old monster behind the movies

Al Jazeera

This is the second time Godzilla and King Kong have appeared in a film together in recent times, with 2021's Godzilla vs Kong being the first instalment. Both films were directed by Adam Wingard. Godzilla x Kong made back its budget of $135m in its first weekend, when it took in $195m at cinemas, according to figures from Box Office Mojo. In total, it has sold $209m in tickets so far and has scored a very respectable 92 percent Rotten Tomatoes audience rating. The origins of Godzilla go back 70 years to the 1954 release in Tokyo, Japan, of the first film – Gojira, directed by Ishiro Honda.


AI ranks EVERY Christopher Nolan movie - after director took home first-ever Oscar for Oppenheimer... so do YOU agree with ChatGPT?

Daily Mail - Science & tech

'Oppenheimer' swept away the competition at the 2024 Oscars, receiving seven awards, including earning renowned director Christopher Nolan his first golden man statuette. While this is the filmmaker's first major award-winning film, he has been producing movies since 1998, when he made 'Following' – and has made 10 more since. We asked ChatGPT to rank his other 11 films, dating back 26 years to 'Following' and running from his 2010 film 'Inception' up through his 2012 film 'The Dark Knight Rises' and his 2020 film 'Tenet.' The historic film starred Cillian Murphy as J Robert Oppenheimer, the director of the Los Alamos lab that designed and built the world's first atomic bomb during World War II – he is often known as the 'father of the atomic bomb.' 'Oppenheimer' swept the box office when it was released on July 21, 2023, reeling in a whopping $82.4 million in its opening weekend, and won Nolan Best Picture and Best Director during Sunday's award show.


The Year We Embraced Our Destruction

The Atlantic - Technology

The sounds came out of my mouth with an unexpected urgency. The cadence was deliberate--more befitting of an incantation than an order: one large strawberry-lemon-mint Charged Lemonade. The words hung in the air for a moment, giving way to a stillness punctuated only by the soft whir of distant fluorescent lights and the gentle hum of a Muzak cover of Bruce Hornsby's "Mandolin Rain." The time was 9:03 a.m.; the sun had been up for only one hour. I watched the kind woman behind the counter stifle an eye roll, a small mercy for which I will be eternally grateful.


Artificial intelligence and US nuclear weapons decisions: How big a role?

FOX News

FOX News contributor Dr. Rebecca Grant tells 'FOX News Live' that she believes tensions in the Middle East can be contained to just Israel. The Pentagon announced a new tactical nuclear bomb program on Oct. 27. Rep. Mike Rogers, R-Ala., and Sen. Roger Wicker, R-Miss., welcomed the new bomb because it "will better allow the Air Force to reach hardened and deeply-buried targets" in Europe and the Pacific. This B61-13 variant is designed for heavy blast against nasty targets such as underground enemy nuclear missile sites. And by the time the bomb is ready, after the late 2020s, AI may have a hand in how and when it's detonated.


Scientists sound alarm as NASA says small chance asteroid 'Bennu' the size of the Empire State Building could smash into earth: 'It would be like unleashing 24 atomic bombs'

Daily Mail - Science & tech

NASA has spent seven years trying to prevent Bennu -- an asteroid taller than the Empire State Building and named after ancient Egypt's fiery bird-god -- from crashing cataclysmically into Earth. While Bennu's chances of impact are just 1-in-2,700 -- more than five times a person's chance of being struck by lightning -- NASA's team has nevertheless categorized it as one of the two 'most hazardous known asteroids.' In a worst-case scenario, the roughly 510-meter-wide, carbon-based behemoth would smash into Earth with 1,200 megatons of energy: 24 times the power of the largest nuclear bomb ever detonated (the Soviet Union's 'Tsar Bomba'). If it happens, Bennu would unleash that 1.2-gigaton impact 159 years from this Sunday, on September 24, 2182. While Bennu is nowhere near the size of the dino-killing, six-mile-across space rock that hit the Yucatan 66 million years ago, astronomers believe the asteroid 'could cause continental devastation if it became an Earth impactor.'


The AI Doomsday Bible Is a Book About the Atomic Bomb

WIRED

In December 1938, two German chemists, Otto Hahn and Fritz Strassmann, working on the ground floor of a grand research institute in suburban Berlin, accidentally ushered the nuclear era into existence. They were bombarding uranium with radiation to see what substances this process created -- just another experiment in a long string of assays trying to figure out the strange physics of the radioactive metal. What Hahn and Strassmann ended up discovering was nuclear fission -- splitting uranium atoms in two and releasing the enormous energy locked up within the atomic nucleus. To nuclear physicists, the implications of this strange experiment were immediately obvious. In January 1939, the Danish physicist Niels Bohr carried the news across the Atlantic to a conference in Washington, DC, where scientists were stunned by the findings. "It is a profound and necessary truth that the deep things in science are not found because they are useful."


The Man Who Wrote the AI Doomer Bible

The Atlantic - Technology

A framed photograph of three men in military fatigues hangs above his desk. They're tightening straps on what first appear to be two water heaters but are, in fact, thermonuclear weapons. Resting against a nearby wall is a black-and-white print depicting the first billionth of a second after the detonation of an atomic bomb: a thousand-foot-tall ghostly amoeba. And above us, dangling from the ceiling like the sword of Damocles, is a plastic model of the Hindenburg. Depending on how you choose to look at it, Rhodes's office is either a shrine to awe-inspiring technological progress or a harsh reminder of its power to incinerate us all in the blink of an eye.


Why I'm Not Worried About A.I. Killing Everyone and Taking Over the World

Slate

This article was co-published with Understanding AI, a newsletter that explores how A.I. works and how it's changing our world. Geoffrey Hinton is a legendary computer scientist whose work laid the foundation for today's artificial intelligence technology. He was a co-author of two of the most influential A.I. papers: a 1986 paper describing a foundational technique (called backpropagation) that is still used to train deep neural networks, and a 2012 paper demonstrating that deep neural networks could be shockingly good at recognizing images. That 2012 paper helped spark the deep learning boom of the last decade. Google hired the paper's authors in 2013, and Hinton had been helping Google develop its A.I. technology ever since. But last week Hinton quit Google so he could speak freely about his fears that A.I. systems will soon become smarter than us and gain the power to enslave or kill us. "There are very few examples of a more intelligent thing being controlled by a less intelligent thing," Hinton said in an interview on CNN last week.


'A race it might be impossible to stop': how worried should we be about AI?

The Guardian

Last Monday an eminent, elderly British scientist lobbed a grenade into the febrile anthill of researchers and corporations currently obsessed with artificial intelligence or AI (aka, for the most part, a technology called machine learning). The scientist was Geoffrey Hinton, and the bombshell was the news that he was leaving Google, where he had been doing great work on machine learning for the last 10 years, because he wanted to be free to express his fears about where the technology he had played a seminal role in founding was heading. To say that this was big news would be an epic understatement. The tech industry is a huge, excitable beast that is occasionally prone to outbreaks of "irrational exuberance", ie madness. One recent bout of it involved cryptocurrencies and a vision of the future of the internet called "Web3", which an astute young blogger and critic, Molly White, memorably describes as "an enormous grift that's pouring lighter fluid on our already smoldering planet".